11 research outputs found

    Genetic Algorithm-based Mapper to Support Multiple Concurrent Users on Wireless Testbeds

    Communication and networking research introduces new protocols and standards, and an increasing number of researchers rely on real experiments rather than simulations to evaluate the performance of their new protocols. A number of testbeds are currently available for this purpose, and a growing number of users are requesting access to them, which motivates better testbed utilization through concurrent experimentation. In this work, we introduce a novel mapping algorithm that aims to maximize wireless testbed utilization using frequency slicing of the spectrum resources. The mapper employs a genetic algorithm to find the best combination of requests that can be served concurrently, after obtaining all possible mappings of each request via an induced sub-graph isomorphism stage. The proposed mapper is tested on grid testbeds and randomly generated topologies. The solution of our mapper is compared to the optimal one, obtained through a brute-force search, and serves the same number of requests in 82.96% of the testing scenarios. Furthermore, we show the effect of careful testbed topology design on utilization by applying our mapper to a carefully positioned 8-node testbed. In addition, our proposed approach to testbed slicing and request mapping improves the total number of served requests roughly fivefold compared to a simple allocation policy with no slicing.
    Comment: IEEE Wireless Communications and Networking Conference (WCNC) 201
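    The request-combination search at the heart of the mapper can be sketched as a toy genetic algorithm over bitstrings (1 = request served). The demands, spectrum capacity, and GA parameters below are invented for illustration, and the induced sub-graph isomorphism stage is omitted; this is only a minimal sketch of the selection idea, not the paper's implementation.

    ```python
    import random

    random.seed(42)

    # Hypothetical requests: frequency slices each request needs.
    demands = [3, 2, 4, 1, 2, 3]
    CAPACITY = 8  # total slices available on the testbed (assumed)

    def fitness(chromosome):
        """Number of served requests if their combined demand fits the spectrum, else 0."""
        used = sum(d for bit, d in zip(chromosome, demands) if bit)
        return sum(chromosome) if used <= CAPACITY else 0

    def evolve(pop_size=30, generations=60, mutation=0.05):
        """Elitist GA with one-point crossover and per-bit mutation."""
        pop = [[random.randint(0, 1) for _ in demands] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # keep the fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(demands))   # one-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (random.random() < mutation) for bit in child]
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best))  # requests served concurrently by the best combination
    ```

    With these toy demands the optimum is four concurrent requests (e.g. slices 3+2+1+2 = 8); a real mapper would score chromosomes against the feasible node mappings found in the isomorphism stage.
    
    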

    SODIM: Service Oriented Data Integration based on MapReduce

    Data integration has become a backbone for many essential and widely used services. These services depend on integrating data from multiple sources quickly and efficiently in order to deliver the level of performance they are committed to. As the size of data available across different environments increases, and as systems grow more heterogeneous and autonomous, data integration becomes a crucial part of most modern systems. Data integration systems can benefit from innovative dynamic infrastructure solutions such as Clouds, with their greater agility, lower cost, device independence, location independence, and scalability. This study consolidates data integration, Service Orientation, and distributed processing into a new data integration system called Service Oriented Data Integration based on MapReduce (SODIM), which improves system performance, especially with a large number of data sources, and can efficiently be hosted on modern dynamic infrastructures such as Clouds.
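    The MapReduce style of integration described above can be illustrated with a minimal sketch: a map phase keys partial records from several sources, and a reduce phase merges records that share a key. The source names and record fields below are invented; SODIM itself runs this pattern as distributed services rather than in-process functions.

    ```python
    from collections import defaultdict
    from itertools import chain

    # Two hypothetical heterogeneous sources exposing partial customer records.
    source_a = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
    source_b = [{"id": 2, "email": "bob@example.com"},
                {"id": 3, "email": "carol@example.com"}]

    def map_phase(record):
        """Emit a (key, partial-record) pair for one source record."""
        return record["id"], {k: v for k, v in record.items() if k != "id"}

    def reduce_phase(pairs):
        """Merge all partial records that share a key into one integrated record."""
        merged = defaultdict(dict)
        for key, partial in pairs:
            merged[key].update(partial)
        return dict(merged)

    integrated = reduce_phase(map(map_phase, chain(source_a, source_b)))
    print(integrated[2])  # record assembled from both sources
    ```

    In a real deployment each source would be wrapped as a service and the map/reduce phases distributed across worker nodes, which is where the scalability benefit with many sources comes from.
    
    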

    VNetIntSim: An Integrated Simulation Platform to Model Transportation and Communication Networks

    The paper introduces a Vehicular Network Integrated Simulator (VNetIntSim) that integrates transportation modelling with Vehicular Ad Hoc Network (VANET) modelling. Specifically, VNetIntSim integrates the OPNET software, a communication network simulator, with the INTEGRATION software, a microscopic traffic simulator. The INTEGRATION software simulates the movement of travellers and vehicles, while the OPNET software models the data exchange through the communication system; information is exchanged between the two simulators as needed. As a proof of concept, VNetIntSim is used to quantify the impact of mobility parameters (traffic stream speed and density) on communication system performance, and more specifically on data routing (packet drops and route discovery time).
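    The exchange between the two simulators can be sketched as a time-stepped co-simulation loop: a stand-in traffic model advances vehicle positions, and a stand-in network model consumes them. The step size, speed, radio range, and positions below are all assumed for illustration; the real platform couples OPNET and INTEGRATION, not these toy functions.

    ```python
    STEP = 1.0    # synchronization interval, seconds (assumed)
    SPEED = 10.0  # traffic stream speed, m/s (assumed)
    RANGE = 25.0  # radio range, metres (assumed)

    def traffic_step(positions, dt):
        """Advance each vehicle along a 1-D road (stand-in for INTEGRATION)."""
        return [x + SPEED * dt for x in positions]

    def network_step(positions):
        """Count each vehicle's neighbours within radio range (stand-in for OPNET)."""
        return [sum(1 for j, y in enumerate(positions) if j != i and abs(x - y) <= RANGE)
                for i, x in enumerate(positions)]

    positions = [0.0, 20.0, 60.0]       # initial vehicle positions, metres
    for _ in range(3):                  # three synchronization intervals
        positions = traffic_step(positions, STEP)   # mobility update
        neighbours = network_step(positions)        # topology seen by the network model
    print(positions, neighbours)
    ```

    Because all vehicles move at the same toy speed the topology here is static; varying speed and density per vehicle is exactly what lets the platform study their impact on packet drops and route discovery.
    
    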

    Software bug prediction using weighted majority voting techniques

    Mining software repositories is a growing research field in which the rich data available in different development repositories are analyzed and cross-linked to uncover useful information. Bug prediction is one of the potential benefits of mining software repositories: predicting potential defects early, as they are introduced into the version control system, saves time and effort during the testing and maintenance phases. In this paper, defect prediction models that use ensemble classification techniques are proposed. The proposed models have been applied using different sets of software metrics as attributes of the classification techniques and tested on datasets of different sizes. The results show that change metrics outperform both static code metrics and the combined model of change and static code metrics, and that ensembles tend to be more accurate than their base classifiers. Defect prediction models using change metrics and ensemble classifiers reveal the best performance, especially when the datasets have an imbalanced class distribution.
    Keywords: Modeling and prediction, Product metrics, Process metrics, Classifier design and evaluation
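    The weighted majority voting idea can be sketched in a few lines: each base classifier casts a vote for a label, and votes are summed by classifier weight. The classifier names and weights below are illustrative placeholders, not the paper's trained models.

    ```python
    def weighted_majority_vote(predictions, weights):
        """predictions: {classifier: predicted label}; weights: {classifier: weight}.

        Returns the label whose supporting classifiers carry the most total weight.
        """
        scores = {}
        for clf, label in predictions.items():
            scores[label] = scores.get(label, 0.0) + weights[clf]
        return max(scores, key=scores.get)

    # Hypothetical votes from three base classifiers on one module.
    preds = {"naive_bayes": "defective", "j48": "clean", "svm": "defective"}
    wts = {"naive_bayes": 0.6, "j48": 0.9, "svm": 0.5}
    print(weighted_majority_vote(preds, wts))  # defective: 1.1 vs clean: 0.9
    ```

    In practice the weights would be derived from each base classifier's validation performance, so a strong classifier can be outvoted only by agreement among weaker ones.
    
    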

    AR-Sanad 280K: A Novel 280K Artificial Sanads Dataset for Hadith Narrator Disambiguation

    Determining hadith authenticity is vitally important in the Islamic religion because hadiths record the sayings and actions of Prophet Muhammad (PBUH), and they are the second source of Islamic teachings following the Quran. When authenticating a hadith, the reliability of the hadith narrators is a major factor that hadith scholars consider. However, many narrators share similar names, and the narrators’ full names are not usually included in the narration chains of hadiths. Thus, ambiguous narrators must first be identified before their reliability level can be determined. There are no available datasets that could help address this problem of identifying narrators. Here, we present a new dataset that contains narration chains (sanads) with identified narrators. The AR-Sanad 280K dataset has around 280K artificial sanads and can be used to identify 18,298 narrators. After creating the AR-Sanad 280K dataset, we address narrator disambiguation in several experimental setups. Hadith narrator disambiguation is modeled as a multiclass classification problem with 18,298 class labels. We test different representations and models in our experiments. The best results were achieved by fine-tuning a BERT-based deep learning model (AraBERT). We obtained a 92.9 Micro F1 score and a 30.2 sanad error rate (SER) on the validation set of our artificial AR-Sanad 280K dataset. Furthermore, we extracted a real test set from the sanads of the famous six books of Islamic hadith. Evaluating the best model on this real test data, we achieved an 83.5 Micro F1 score and a 60.6 sanad error rate.
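    The multiclass formulation can be illustrated with a toy setup: an ambiguous name in a sanad is resolved to a narrator ID using the surrounding chain as context. The transliterated snippets, narrator IDs, and token-overlap classifier below are all invented for illustration; the paper fine-tunes AraBERT over 18,298 classes rather than using this nearest-context heuristic.

    ```python
    from collections import Counter

    # Toy labeled contexts: (sanad fragment, narrator ID). Invented examples.
    train = [
        ("haddathana sufyan ibn uyaynah an amr", "narrator_101"),
        ("haddathana sufyan al-thawri an mansur", "narrator_102"),
        ("akhbarana malik an nafi an ibn umar", "narrator_103"),
    ]

    def tokens(text):
        """Bag-of-words representation of a sanad fragment."""
        return Counter(text.split())

    def classify(context):
        """Resolve an ambiguous name to the narrator whose training context
        shares the most tokens with the query fragment."""
        query = tokens(context)
        overlap = lambda item: sum((tokens(item[0]) & query).values())
        return max(train, key=overlap)[1]

    # "sufyan" alone is ambiguous; the teacher "mansur" disambiguates it.
    print(classify("haddathana sufyan an mansur"))
    ```

    The toy query shows why context matters: two narrators named Sufyan exist, and only the co-occurring names in the chain separate them, which is the signal a fine-tuned transformer exploits at scale.
    
    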

    A Secure Blockchain Framework for Storing Historical Text: A Case Study of the Holy Hadith

    Historical texts are one of the main pillars for understanding current civilization and are used as references for many of its aspects. Hadiths are an example of historical texts that should be securely preserved. With the expansion of online resources, fabricating and altering fake Hadiths has become easy; it is therefore more challenging to authenticate the Hadith content available online, and much harder to keep the authenticated results secure and unmanipulated. In this research, we use the capabilities of distributed blockchain technology to securely archive the Hadith and its level of authenticity in a blockchain. We selected a customized permissioned blockchain model in which the main entities approving the level of authenticity of a Hadith are well-established, specialized institutions in the main Islamic countries, each of which can apply its own Hadith validation model. The proposed solution guarantees integrity using the crowd wisdom represented by the selected nodes in the blockchain, which use voting algorithms to decide on the insertion of any new Hadith into the database. This technique secures data integrity at any given time: if any organization’s credentials are compromised and used to update the data maliciously, approval from 50% + 1 of the whole network’s nodes is still required. In case of malicious or misguided information while reaching consensus, the system self-heals using practical Byzantine Fault Tolerance (pBFT). We evaluated the proposed framework’s read/write performance and found it adequate for the operational requirements.
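    The 50% + 1 approval rule can be sketched as a simple tally over node votes. The institution names and vote pattern below are illustrative placeholders, and a real deployment would reach this decision through the pBFT consensus protocol rather than a single in-process function.

    ```python
    def approved(votes):
        """True when strictly more than half of all network nodes voted to
        insert the new Hadith record (the 50% + 1 rule)."""
        yes = sum(1 for v in votes.values() if v)
        return yes >= len(votes) // 2 + 1

    # Hypothetical vote on inserting one authenticated Hadith.
    votes = {
        "institution_a": True,
        "institution_b": True,
        "institution_c": False,
        "institution_d": True,
        "institution_e": False,
    }
    print(approved(votes))  # 3 of 5 nodes approve, so the insert passes
    ```

    Under this rule a single compromised institution cannot insert or alter a record on its own, which is the integrity guarantee the framework relies on.
    
    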